Publish agent errors as events
Since getting pod logs is dependent on the agent, troubleshooting a disconnected agent is difficult. One way to address this would be to publish Kubernetes events for agent errors. If folks think this is a good idea, I'd be happy to implement it. Happy to discuss.

This would not help if you were using https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2025-extend-konnectivity-for-both-directions, but that's not a deal breaker. Were you thinking the agent would write to the status on its own Pod resource? Or enhancing something like https://github.com/kubernetes/node-problem-detector?

I was thinking about creating events for human consumption, i.e. events that show up when describing the agent pods (a sketch of this approach follows the thread). But I like your idea of using a condition, since it could also be used as a readiness gate (also sketched below). If there are no objections, I'll put together a PR.

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now, please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-triage-robot: Closing this issue. In response to this: the triage rules and /close command quoted above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
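The events proposal maps onto client-go's standard event-recording machinery. Below is a minimal sketch, not the agent's actual implementation: it assumes the agent runs in-cluster with a service account allowed to create events, and that the pod's own name and namespace are injected via the downward API. The POD_NAME / POD_NAMESPACE variables and the ProxyServerDisconnected reason are illustrative names, not existing agent configuration.

```go
package main

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/record"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The agent's own identity, injected via the downward API (assumed env names).
	ns, name := os.Getenv("POD_NAMESPACE"), os.Getenv("POD_NAME")

	// Fetch the Pod so the event carries a complete object reference (incl. UID).
	pod, err := client.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Standard client-go event plumbing: recorded events are forwarded
	// asynchronously to the API server through the broadcaster's sink.
	broadcaster := record.NewBroadcaster()
	defer broadcaster.Shutdown() // a long-running agent would keep this alive instead
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: client.CoreV1().Events(ns),
	})
	recorder := broadcaster.NewRecorder(scheme.Scheme,
		corev1.EventSource{Component: "konnectivity-agent"})

	// On a connection error, attach a Warning event to the agent's own Pod;
	// it then surfaces in `kubectl describe pod <agent-pod>`.
	recorder.Eventf(pod, corev1.EventTypeWarning, "ProxyServerDisconnected",
		"lost connection to proxy server: %v", "connection reset by peer")
}
```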
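The condition idea pairs naturally with pod readiness gates: the pod spec declares a custom condition type under spec.readinessGates, the agent patches that condition on its own status subresource, and kubelet folds it into the pod's overall readiness. A sketch under the same assumptions, with konnectivity.io/ServerConnected as a hypothetical condition type:

```go
package main

import (
	"context"
	"encoding/json"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// The pod spec would opt in with a readiness gate, e.g.:
//
//	spec:
//	  readinessGates:
//	  - conditionType: "konnectivity.io/ServerConnected"
//
// kubelet then keeps the pod NotReady until this condition is True.
func setConnectedCondition(ctx context.Context, client kubernetes.Interface,
	ns, name string, connected bool) error {
	status, reason := corev1.ConditionTrue, "ConnectedToProxyServer"
	if !connected {
		status, reason = corev1.ConditionFalse, "ProxyServerUnreachable"
	}

	// Pod conditions use "type" as their strategic-merge key, so patching a
	// single entry upserts that condition without touching the others.
	patch, err := json.Marshal(map[string]interface{}{
		"status": map[string]interface{}{
			"conditions": []corev1.PodCondition{{
				Type:               corev1.PodConditionType("konnectivity.io/ServerConnected"),
				Status:             status,
				Reason:             reason,
				LastTransitionTime: metav1.Now(),
			}},
		},
	})
	if err != nil {
		return err
	}

	_, err = client.CoreV1().Pods(ns).Patch(ctx, name,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}
```

Either route needs matching RBAC (create on events for the recorder; patch on pods/status for the condition), and the two are complementary: events are for humans running kubectl describe, while the condition is machine-readable and can gate readiness.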